90 research outputs found

    BayesNAS: A Bayesian Approach for Neural Architecture Search

    One-Shot Neural Architecture Search (NAS) is a promising method for significantly reducing search time without any separate training. It can be treated as a network compression problem on the architecture parameters of an over-parameterized network. However, most one-shot NAS methods suffer from two issues. First, dependencies between a node and its predecessors and successors are often disregarded, which results in improper treatment of zero operations. Second, pruning architecture parameters based on their magnitude is questionable. In this paper, we employ the classic Bayesian learning approach to alleviate these two issues by modeling architecture parameters using hierarchical automatic relevance determination (HARD) priors. Unlike other NAS methods, we train the over-parameterized network for only one epoch and then update the architecture. Impressively, this enabled us to find the architecture on CIFAR-10 within only 0.2 GPU days using a single GPU. Competitive performance can also be achieved by transferring to ImageNet. As a byproduct, our approach can be applied directly to compress convolutional neural networks by enforcing structural sparsity, achieving extremely sparse networks without accuracy deterioration.
    Comment: International Conference on Machine Learning 2019
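    The mechanism behind such hierarchical relevance-determination priors can be illustrated on a toy linear model. The sketch below is not the paper's NAS code; it shows classic automatic relevance determination (ARD) with the standard EM re-estimation, where each weight gets its own prior precision and the precisions of irrelevant weights grow large, pruning them without a magnitude threshold on the weights themselves. All sizes and the cutoff on alpha are illustrative choices.

```python
import numpy as np

# Toy ARD sketch (not the paper's method): linear model y = X w + noise.
# Each weight w_i has a zero-mean Gaussian prior with its own precision
# alpha_i; re-estimating alpha_i from the posterior drives the precisions
# of irrelevant weights to large values, effectively pruning them.

rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[[1, 4]] = [2.0, -3.0]            # only two relevant weights
y = X @ w_true + 0.1 * rng.normal(size=n)

alpha = np.ones(d)                      # per-weight prior precisions
beta = 1.0 / 0.1**2                     # noise precision (assumed known here)
for _ in range(50):
    Sigma = np.linalg.inv(beta * X.T @ X + np.diag(alpha))  # posterior covariance
    mu = beta * Sigma @ X.T @ y                             # posterior mean
    alpha = 1.0 / (mu**2 + np.diag(Sigma))                  # EM update for ARD

kept = np.where(alpha < 1.0)[0]         # small precision => relevant (ad hoc cut)
print(kept)
```

    Note that the decision to keep a weight comes from its prior precision, not directly from its magnitude, which is the distinction the abstract draws against magnitude-based pruning.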

    A Sparse Bayesian Deep Learning Approach for Identification of Cascaded Tanks Benchmark

    Nonlinear system identification is important, with a wide range of applications. Typical approaches include Volterra series models, nonlinear autoregressive with exogenous inputs (NARX) models, block-structured models, state-space models and neural network models. Among them, neural networks (NNs) are an important black-box method thanks to their universal approximation capability and lower dependency on prior information. However, NNs pose several challenges. The first lies in the design of a proper network structure: a network that is too simple cannot approximate the behavior of the system, while an overly complex model may lead to overfitting. The second lies in the availability of data: for some nonlinear systems it is difficult to collect enough data to train a neural network, which raises the challenge of how to train a network for system identification with a small dataset. In addition, if the uncertainty of the NN parameters could be obtained, it would be beneficial for further analysis. In this paper, we propose a sparse Bayesian deep learning approach to address these problems. Specifically, the Bayesian method reinforces regularization on neural networks by introducing sparsity-inducing priors, and it can also compute the uncertainty of the NN parameters. An efficient iterative re-weighted algorithm is presented. We also test the capacity of our method to identify the system using various fractions of the original dataset. A one-step-ahead prediction experiment on the Cascaded Tanks benchmark shows the effectiveness of our method. Furthermore, we test our algorithm on a more challenging simulation experiment on this benchmark, where it also outperforms other methods.
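    The flavor of an iterative re-weighted algorithm with a sparsity-inducing penalty can be sketched on a linear toy problem (the paper's exact algorithm is not reproduced here). Each pass solves a ridge problem whose per-weight penalties are refreshed from the previous iterate, so small weights are penalized harder and squeezed toward zero; the penalty strength `lam` and smoothing `eps` are illustrative values.

```python
import numpy as np

# Generic iterative re-weighting sketch (not the paper's exact algorithm):
# the sparsity penalty lam * sum_i |w_i| is handled by a sequence of ridge
# solves whose per-weight penalties lam / (|w_i| + eps) come from the
# previous iterate, driving irrelevant weights to zero.

rng = np.random.default_rng(1)
n, d = 300, 20
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[[0, 7, 13]] = [1.5, -2.0, 1.0]   # sparse ground truth
y = X @ w_true + 0.05 * rng.normal(size=n)

lam, eps = 5.0, 1e-6
w = np.zeros(d)
for _ in range(30):
    W = lam / (np.abs(w) + eps)          # re-weighted per-weight penalties
    w = np.linalg.solve(X.T @ X + np.diag(W), X.T @ y)

support = np.where(np.abs(w) > 0.1)[0]   # surviving weights
print(support)
```

    In the Bayesian formulation the re-weighting arises from updating the prior hyperparameters rather than from an explicit penalty, but the computational pattern of alternating weighted solves is the same.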

    Prediction of TF-binding site by inclusion of higher order position dependencies

    Most proposed methods for TF-binding site (TFBS) prediction use only low-order dependencies, owing to the lack of efficient methods to extract higher-order dependencies. In this work, we first propose a novel method to extract higher-order dependencies by applying a CNN to histone modification features. We then propose a novel TFBS prediction method, referred to as CNN_TF, which incorporates both low-order and higher-order dependencies. CNN_TF is first evaluated on 13 TFs in the mES cell line. Results show that using higher-order dependencies significantly outperforms using low-order dependencies on 11 TFs, indicating that higher-order dependencies are indeed more effective for TFBS prediction. Further experiments show that using both low-order and higher-order dependencies significantly improves performance on 12 TFs, indicating that the two dependency types are complementary. To evaluate the influence of cell type on prediction performance, CNN_TF was applied to five TFs in five human cell types. Although low-order and higher-order dependencies contribute differently in different cell types, they are always complementary in prediction. Compared with several state-of-the-art methods, CNN_TF outperforms them by at least 5.3% in AUPR.
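    The distinction between low-order and higher-order sequence dependencies can be made concrete with a small feature-extraction sketch (this is an illustration, not CNN_TF itself): single-nucleotide frequencies capture no positional dependencies, while k-mer frequencies capture dependencies among k adjacent positions, and concatenating the two gives a downstream classifier access to both.

```python
from collections import Counter
from itertools import product

# Illustrative only (not CNN_TF): "low order" = single-nucleotide
# frequencies, "higher order" = k-mer frequencies capturing dependencies
# between adjacent positions; the two vectors are concatenated.

def kmer_features(seq, k):
    """Frequency vector over all 4**k DNA k-mers, in lexicographic order."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(len(seq) - k + 1, 1)
    return [counts[''.join(p)] / total for p in product('ACGT', repeat=k)]

def combined_features(seq):
    # low-order (k=1) plus higher-order (k=3) features
    return kmer_features(seq, 1) + kmer_features(seq, 3)

v = combined_features('ACGTACGTAC')
print(len(v))   # 4 + 64 = 68 features
```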

    Identifying DNA-binding proteins by combining support vector machine and PSSM distance transformation

    Background: DNA-binding proteins play a pivotal role in various intra- and extracellular activities ranging from DNA replication to gene expression control. Identification of DNA-binding proteins is one of the major challenges in genome annotation. Several computational methods have been proposed in the literature for DNA-binding protein identification; however, most of them provide little insight into DNA-protein interactions. Results: We first present a new protein sequence encoding method called PSSM Distance Transformation, and then construct a DNA-binding protein identification method (SVM-PSSM-DT) by combining PSSM Distance Transformation with a support vector machine (SVM). First, PSSM profiles are generated by using the PSI-BLAST program to search the non-redundant (NR) database. Next, the PSSM profiles are transformed into uniform numeric representations by the distance transformation scheme. Finally, the resulting representations are input into an SVM classifier, which determines whether a sequence binds DNA. In a benchmark test on 525 DNA-binding and 550 non-DNA-binding proteins using jackknife validation, the model achieved an ACC of 79.96%, an MCC of 0.622 and an AUC of 86.50%, considerably better than most existing state-of-the-art predictive methods. When tested on the recently constructed independent dataset PDB186, SVM-PSSM-DT again achieved the best performance, with an ACC of 80.00%, an MCC of 0.647 and an AUC of 87.40%, outperforming several existing state-of-the-art methods. Conclusions: The experimental results demonstrate that PSSM Distance Transformation is an effective protein sequence encoding method and that SVM-PSSM-DT is a useful tool for identifying DNA-binding proteins. 
    A user-friendly web server for SVM-PSSM-DT was constructed and is freely accessible at http://bioinformatics.hitsz.edu.cn/PSSM-DT/
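    One plausible reading of a distance-transformation encoding can be sketched as follows; the paper's exact PSSM-DT formula may differ, and the `pssm_dt` function and its shapes here are assumptions for illustration. A length-L protein yields an L x 20 PSSM; averaging products of scores at positions separated by each distance d produces a vector whose length is independent of L, which is what lets variable-length sequences feed a fixed-input SVM.

```python
import numpy as np

# Hedged sketch of a distance-transformation style encoding (the paper's
# exact PSSM-DT may differ): for each separation d, average the outer
# products of PSSM rows d apart, giving a fixed-length feature vector.

def pssm_dt(pssm, max_dist):
    L, A = pssm.shape                   # A = 20 amino-acid columns
    feats = []
    for d in range(1, max_dist + 1):
        # (A x A) matrix of averaged score products at separation d
        m = pssm[:-d].T @ pssm[d:] / (L - d)
        feats.append(m.ravel())
    return np.concatenate(feats)        # length = max_dist * A * A

rng = np.random.default_rng(0)
v = pssm_dt(rng.normal(size=(50, 20)), max_dist=3)
print(v.shape)  # (1200,)
```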

    MTTFsite : cross-cell-type TF binding site prediction by using multi-task learning

    Motivation: The prediction of transcription factor binding sites (TFBSs) is crucial for gene expression analysis. Supervised learning approaches for TFBS prediction require large amounts of labeled data; however, many TFs in certain cell types have insufficient labeled data, or none at all. Results: In this paper, a multi-task learning framework (called MTTFsite) is proposed to address the lack of labeled data by leveraging labeled data available across cell types. MTTFsite contains a shared CNN that learns common features for all cell types and a private CNN for each cell type that learns cell-type-specific features. The common features are intended to help predict TFBSs for all cell types, especially those lacking labeled data. MTTFsite is evaluated on 241 cell-type-TF pairs and compared with a baseline method that uses no multi-task learning and with a fully shared multi-task model that uses only a shared CNN without private CNNs. For cell types with insufficient labeled data, MTTFsite performs better than the baseline method and the fully shared model on more than 89% of pairs. For cell types without any labeled data, MTTFsite outperforms the baseline method and the fully shared model on more than 80% and 93% of pairs, respectively. A novel gene expression prediction method (called TFChrome) using both MTTFsite and histone modification features is also presented. Results show that TFBSs predicted by MTTFsite alone achieve good performance, and combining MTTFsite with histone modification features yields a significant 5.7% performance improvement.
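    The shared/private parameter-sharing pattern described above can be sketched structurally (hypothetical shapes and plain matrix layers, not the paper's CNNs): one shared extractor is reused by every cell type, each cell type owns a private extractor, and their outputs are concatenated before the per-task classifier.

```python
import numpy as np

# Structural sketch of the shared/private multi-task idea (hypothetical
# shapes; real MTTFsite uses CNNs).  W_shared is one copy reused by all
# tasks; each task also owns a private extractor and output head.

rng = np.random.default_rng(0)
d_in, d_shared, d_priv, n_tasks = 32, 16, 8, 3

W_shared = rng.normal(size=(d_in, d_shared))            # shared by all tasks
W_private = [rng.normal(size=(d_in, d_priv)) for _ in range(n_tasks)]
W_out = [rng.normal(size=(d_shared + d_priv,)) for _ in range(n_tasks)]

def predict(x, task):
    h = np.concatenate([np.tanh(x @ W_shared),          # common features
                        np.tanh(x @ W_private[task])])  # cell-type features
    return 1.0 / (1.0 + np.exp(-h @ W_out[task]))       # binding probability

x = rng.normal(size=d_in)
print([round(predict(x, t), 3) for t in range(n_tasks)])
```

    Because gradients from every task flow through `W_shared`, cell types with little labeled data benefit from features learned on data-rich cell types, which is the motivation the abstract gives for the shared CNN.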

    A Family of High Step-Up Coupled-Inductor Impedance-Source Inverters With Reduced Switching Spikes


    High Step-Up Y-Source Inverter with Reduced DC-Link Voltage Spikes


    Motor Sequence Learning Is Associated With Hippocampal Subfield Volume in Humans With Medial Temporal Lobe Epilepsy

    Objectives: Medial temporal lobe epilepsy (mTLE) is characterized by decreased hippocampal volume, which results in motor memory consolidation impairments. However, the extent to which motor memory acquisition is affected in humans with mTLE remains poorly understood. We therefore examined the extent to which learning of a motor tapping sequence task is affected by mTLE.
    Methods: MRI volumetric analysis was performed using a T1-weighted three-dimensional gradient echo sequence in 15 patients with right mTLE and 15 control subjects. Subjects trained on a motor sequence tapping task, using the left hand in right mTLE patients and the non-dominant hand in neurologically intact controls.
    Results: The number of correct sequences performed by the mTLE patient group increased after training, albeit to a lesser extent than in the control group. Although hippocampal subfield volume was reduced in mTLE relative to controls, no differences were observed in the volumes of other brain areas, including the thalamus, caudate, putamen and amygdala. Correlations between hippocampal subfield volumes and the change in pre- to post-training performance indicated that the volume of hippocampal subfield CA2–3 was associated with motor sequence learning in patients with mTLE.
    Significance: These results provide evidence that individuals with mTLE exhibit learning on a motor sequence task. Learning is linked to the volume of hippocampal subfield CA2–3, supporting a role for the hippocampus in motor memory acquisition.
    Highlights:
    - Humans with mTLE exhibit learning on a motor tapping sequence task, but not to the same extent as neurologically intact controls.
    - Hippocampal subfield volumes are significantly reduced in mTLE; surrounding brain area volumes show no abnormalities.
    - Hippocampal subfield CA2–3 volume is associated with motor sequence learning in humans with mTLE.

    Robust stochastic optimal dispatching of integrated electricity-gas-heat systems with improved integrated demand response

    Get PDF
    In recent years, integrated energy systems that deeply couple power, natural gas, and heat have attracted extensive attention. Uncertainty in both generation and load is an increasingly prominent problem for an integrated electricity-gas-heat system (IEGHS), and effective implementation of demand response (DR) programs is an important way to address it. In this paper, a robust stochastic optimal dispatching method for an IEGHS with integrated DR (IDR) under multiple uncertainties is proposed. A robust adjustable uncertainty set is adopted to handle the uncertainty of wind power, and a Wasserstein generative adversarial network based on gradient normalization is proposed to generate load-side demand scenarios. Furthermore, an improved IDR model that considers the peak-valley difference cost of the electricity, gas, and heat loads is proposed. Finally, simulation analysis demonstrates the efficacy of the proposed model: the system obtains the scheduling scheme with the lowest operating cost even under worst-case scenarios.
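    The role of an adjustable uncertainty set can be shown with a deliberately tiny toy (all numbers and the single-generator setup are hypothetical, not the paper's IEGHS model): dispatch must cover load under the worst wind realization in an interval whose width is scaled by a budget parameter gamma, so gamma trades robustness against operating cost.

```python
# Toy sketch of an adjustable uncertainty set (hypothetical numbers, not
# the paper's model): wind lies in [w_hat - gamma*dw, w_hat + gamma*dw];
# thermal generation must cover load under the worst realization, so a
# larger gamma buys robustness at higher cost.

def robust_dispatch(load, w_hat, dw, gamma, gen_cost):
    worst_wind = w_hat - gamma * dw          # least favorable wind output
    gen = max(load - worst_wind, 0.0)        # thermal generation required
    return gen, gen * gen_cost

# Fully robust (gamma = 1): plan for wind at the bottom of its interval.
gen, cost = robust_dispatch(load=100.0, w_hat=30.0, dw=10.0,
                            gamma=1.0, gen_cost=50.0)
print(gen, cost)   # 80.0 4000.0
```

    With gamma = 0 the dispatcher trusts the wind forecast and commits only 70 units of generation; with gamma = 1 it commits 80 and is feasible for every wind value in the interval, which is the worst-case guarantee the abstract refers to.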